A Formal Derivation of Mesh Neural Networks with Their Forward-Only Gradient Propagation

Authors

Abstract

This paper proposes the Mesh Neural Network (MNN), a novel architecture which allows neurons to be connected in any topology and to efficiently route information. In MNNs, information is propagated between neurons through a state transition function. State and error gradients are then directly computed from state updates, without backward computation. The MNN propagation schema is formalized and derived in tensor algebra. The proposed computational model can fully supply a gradient descent process and is potentially suitable for very large scale sparse NNs, due to its expressivity and training efficiency, with respect to NNs based on back-propagation and computational graphs.
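As a rough illustration of the scheme described above, the following NumPy snippet propagates the states of an arbitrarily connected network through a state transition function while carrying the sensitivities of the states with respect to the weights forward alongside them, in the spirit of forward-mode differentiation, so the loss gradient is obtained without any backward pass. The update rule, activation, shapes, and names are our assumptions for illustration, not the paper's exact tensor-algebra formulation.

import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # number of neurons, any topology
W = rng.normal(scale=0.3, size=(n, n))  # weighted adjacency matrix
W *= rng.random((n, n)) < 0.5           # sparse connectivity mask

s = rng.normal(size=n)                  # neuron states
ds_dW = np.zeros((n, n, n))             # sensitivity tensor ds_i/dW_jk

for t in range(10):
    z = W @ s
    # Forward-mode chain rule:
    # dz_i/dW_jk = delta_ij * s_k + sum_l W_il * ds_l/dW_jk
    dz_dW = np.einsum('il,ljk->ijk', W, ds_dW)
    for i in range(n):
        dz_dW[i, i, :] += s
    s = np.tanh(z)                      # state transition
    ds_dW = (1.0 - s ** 2)[:, None, None] * dz_dW

# A scalar error on the final state, e.g. E = 0.5 * ||s - target||^2,
# yields its weight gradient directly from the forward sensitivities.
target = np.zeros(n)
dE_dW = np.einsum('i,ijk->jk', s - target, ds_dW)
print(dE_dW.shape)                      # (n, n): one gradient per connection

Note that the dense sensitivity tensor above costs O(n^3) memory; exploiting sparsity in W is presumably what makes such a scheme viable at the very large scales the abstract mentions.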

Similar articles

Diagrammatic Derivation of Gradient Algorithms for Neural Networks

Deriving gradient algorithms for time-dependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to use the principle of Network Reciprocity to derive such algorithms via a set of simple block diagram manipulation rules. The approach provides a common framework to derive popular algor...
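As a toy rendering of the reciprocity idea (our reading, not the paper's block-diagram notation), the snippet below obtains gradients by running the forward signal-flow diagram in reverse, with each linear block replaced by its transpose and each nonlinearity by its pointwise derivative, then checks the result against a finite difference.

import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x, t = rng.normal(size=3), np.ones(2)

# Forward diagram: x -> [W1] -> tanh -> [W2] -> y
a = W1 @ x
h = np.tanh(a)
y = W2 @ h

# Reciprocal diagram: same blocks, edges reversed, W replaced by W.T,
# tanh replaced by its derivative applied pointwise.
dy = y - t                 # sensitivity of the loss 0.5 * ||y - t||^2
dh = W2.T @ dy
da = (1.0 - h ** 2) * dh
dx = W1.T @ da             # gradient of the loss w.r.t. the input

# Finite-difference check on one input coordinate.
eps = 1e-6
x2 = x.copy(); x2[0] += eps
y2 = W2 @ np.tanh(W1 @ x2)
fd = (0.5 * np.sum((y2 - t) ** 2) - 0.5 * np.sum((y - t) ** 2)) / eps
print(dx[0], fd)           # should agree to about 1e-5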

Formal Verification of Piece-Wise Linear Feed-Forward Neural Networks

We present an approach for the verification of feed-forward neural networks in which all nodes have a piece-wise linear activation function. Such networks are often used in deep learning and have been shown to be hard to verify for modern satisfiability modulo theory (SMT) and integer linear programming (ILP) solvers. The starting point of our approach is the addition of a global linear approxi...
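For background, one standard ingredient of ILP-based verification is the exact big-M encoding of a ReLU node, which the sketch below writes with the PuLP library; the paper's own encoding and its global linear approximation may well differ, and the bounds, objective, and names here are illustrative assumptions.

import pulp

L, U = -2.0, 3.0                                 # known pre-activation bounds
prob = pulp.LpProblem("relu_encoding", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=L, upBound=U)  # pre-activation
y = pulp.LpVariable("y", lowBound=0)             # y = max(0, x)
d = pulp.LpVariable("d", cat="Binary")           # active-phase indicator

prob += y - 0.5 * x            # objective: probe some output property
prob += y >= x                 # with d = 1 these force y = x >= 0,
prob += y <= x - L * (1 - d)   # with d = 0 they force y = 0 and x <= 0
prob += y <= U * d

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(x.varValue, y.varValue)  # a vertex of the exact ReLU feasible set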

Espresso: Efficient Forward Propagation for Binary Deep Neural Networks

There are many application scenarios for which the computational performance and memory footprint of the prediction phase of Deep Neural Networks (DNNs) need to be optimized. Binary Deep Neural Networks (BDNNs) have been shown to be an effective way of achieving this objective. In this paper, we show how Convolutional Neural Networks (CNNs) can be implemented using binary representations. Espr...
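The arithmetic identity underlying most binary-network speedups is that a dot product of {-1, +1} vectors reduces to an XNOR followed by a popcount. The plain-Python sketch below demonstrates the identity itself; Espresso's actual contribution lies in optimized bit-packed kernels, which this does not attempt to reproduce.

import numpy as np

n = 64
rng = np.random.default_rng(2)
a_sign = rng.choice([-1, 1], size=n)    # binarized activations
w_sign = rng.choice([-1, 1], size=n)    # binarized weights

# Pack each sign vector into a single 64-bit word (bit set means +1).
a_bits = sum(1 << i for i in range(n) if a_sign[i] > 0)
w_bits = sum(1 << i for i in range(n) if w_sign[i] > 0)

# Matching bits contribute +1 and mismatches -1, so the dot product
# equals n - 2 * popcount(a XOR w).
mismatches = bin(a_bits ^ w_bits).count("1")
print(n - 2 * mismatches, int(a_sign @ w_sign))   # identical values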

Feed-forward Uncertainty Propagation in Belief and Neural Networks

We propose a feed-forward inference method applicable to belief and neural networks. In a belief network, the method estimates an approximate factorized posterior of all hidden units given the input. In neural networks the method propagates uncertainty of the input through all the layers. In neural networks with injected noise, the method analytically takes into account uncertainties resulting ...
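As a minimal example of what propagating uncertainty through a layer can mean (a generic moment-matching sketch under independence and diagonal-covariance assumptions, not necessarily the paper's posterior approximation), the snippet below pushes a mean and variance through a linear layer with injected Gaussian noise and verifies the analytic moments by Monte Carlo.

import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(4, 3))
b = rng.normal(size=4)
noise_var = 0.01                         # variance of injected noise

mu_in = np.array([0.2, -1.0, 0.5])       # input mean
var_in = np.array([0.1, 0.05, 0.2])      # input variance (diagonal)

mu_out = W @ mu_in + b                   # E[W x + b + eps]
var_out = (W ** 2) @ var_in + noise_var  # Var[...], inputs independent

# Monte-Carlo check of the analytic moments.
x = mu_in + rng.normal(size=(100000, 3)) * np.sqrt(var_in)
y = x @ W.T + b + rng.normal(size=(100000, 4)) * np.sqrt(noise_var)
print(np.allclose(y.mean(0), mu_out, atol=0.02),
      np.allclose(y.var(0), var_out, atol=0.02))   # True True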

3D Polygon Mesh Compression with Multi Layer Feed Forward Neural Networks

In this paper, an experiment is conducted which proves that multi-layer feed-forward neural networks are capable of compressing 3D polygon meshes. Our compression method not only preserves the initial accuracy of the represented object but also enhances it. The neural network employed includes the vertex coordinates, the connectivity and normal information in one compact form, converting the di...
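A bare-bones version of the idea, compressing vertex coordinates alone through a narrow MLP autoencoder bottleneck (the paper additionally folds connectivity and normal information into one compact form; the sizes, learning rate, and synthetic data below are arbitrary assumptions):

import numpy as np

rng = np.random.default_rng(4)
V = rng.normal(size=(1000, 3))               # stand-in for mesh vertex coordinates

W_enc = rng.normal(scale=0.1, size=(3, 2))   # 3 floats -> 2-float code
W_dec = rng.normal(scale=0.1, size=(2, 3))

lr = 0.05
for step in range(2000):                     # plain gradient descent on MSE
    code = np.tanh(V @ W_enc)                # encoder
    recon = code @ W_dec                     # decoder
    err = recon - V
    g_dec = code.T @ err / len(V)
    g_code = (err @ W_dec.T) * (1.0 - code ** 2)
    g_enc = V.T @ g_code / len(V)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(float(np.mean(err ** 2)))              # reconstruction error after training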

Journal

Journal title: Neural Processing Letters

Year: 2021

ISSN: 1573-773X, 1370-4621

DOI: https://doi.org/10.1007/s11063-021-10490-1